The Bottom Line
There are two sealed boxes up for auction, box A and box B. One and only one of these boxes contains a valuable diamond. There are all manner of signs and portents indicating whether a box contains a diamond; but I have no sign which I know to be perfectly reliable. There is a blue stamp on one box, for example, and I know that boxes which contain diamonds are more likely than empty boxes to show a blue stamp. Or one box has a shiny surface, and I have a suspicion—I am not sure—that no diamond-containing box is ever shiny.
Now suppose there is a clever arguer, holding a sheet of paper, and they say to the owners of box A and box B: “Bid for my services, and whoever wins my services, I shall argue that their box contains the diamond, so that the box will receive a higher price.” So the box-owners bid, and box B’s owner bids higher, winning the services of the clever arguer.
The clever arguer begins to organize their thoughts. First, they write, “And therefore, box B contains the diamond!” at the bottom of their sheet of paper. Then, at the top of the paper, the clever arguer writes, “Box B shows a blue stamp,” and beneath it, “Box A is shiny,” and then, “Box B is lighter than box A,” and so on through many signs and portents; yet the clever arguer neglects all those signs which might argue in favor of box A. And then the clever arguer comes to me and recites from their sheet of paper: “Box B shows a blue stamp, and box A is shiny,” and so on, until they reach: “and therefore, box B contains the diamond.”
But consider: At the moment when the clever arguer wrote down their conclusion, at the moment they put ink on their sheet of paper, the evidential entanglement of that physical ink with the physical boxes became fixed.
It may help to visualize a collection of worlds—Everett branches or Tegmark duplicates—within which there is some objective frequency at which box A or box B contains a diamond.1
The ink on paper is formed into odd shapes and curves, which look like this text: “And therefore, box B contains the diamond.” If you happened to be a literate English speaker, you might become confused, and think that this shaped ink somehow meant that box B contained the diamond. Subjects instructed to say the color of printed pictures and shown the word Green in red ink often say “green” instead of “red.” It helps to be illiterate, so that you are not confused by the shape of the ink.
To us, the true import of a thing is its entanglement with other things. Consider again the collection of worlds, Everett branches or Tegmark duplicates. At the moment when all clever arguers in all worlds put ink to the bottom line of their paper—let us suppose this is a single moment—it fixed the correlation of the ink with the boxes. The clever arguer writes in non-erasable pen; the ink will not change. The boxes will not change. Within the subset of worlds where the ink says “And therefore, box B contains the diamond,” there is already some fixed percentage of worlds where box A contains the diamond. This will not change regardless of what is written on the blank lines above.
So the evidential entanglement of the ink is fixed, and I leave to you to decide what it might be. Perhaps box owners who believe a better case can be made for them are more liable to hire advertisers; perhaps box owners who fear their own deficiencies bid higher. If the box owners do not themselves understand the signs and portents, then the ink will be completely unentangled with the boxes’ contents, though it may tell you something about the owners’ finances and bidding habits.
Now suppose another person present is genuinely curious, and they first write down all the distinguishing signs of both boxes on a sheet of paper, and then apply their knowledge and the laws of probability and write down at the bottom: “Therefore, I estimate an 85% probability that box B contains the diamond.” Of what is this handwriting evidence? Examining the chain of cause and effect leading to this physical ink on physical paper, I find that the chain of causality wends its way through all the signs and portents of the boxes, and is dependent on these signs; for in worlds with different portents, a different probability is written at the bottom.
So the handwriting of the curious inquirer is entangled with the signs and portents and the contents of the boxes, whereas the handwriting of the clever arguer is evidence only of which owner paid the higher bid. There is a great difference in the indications of ink, though one who foolishly read aloud the ink-shapes might think the English words sounded similar.
Your effectiveness as a rationalist is determined by whichever algorithm actually writes the bottom line of your thoughts. If your car makes metallic squealing noises when you brake, and you aren’t willing to face up to the financial cost of getting your brakes replaced, you can decide to look for reasons why your car might not need fixing. But the actual percentage of you that survive in Everett branches or Tegmark worlds—which we will take to describe your effectiveness as a rationalist—is determined by the algorithm that decided which conclusion you would seek arguments for. In this case, the real algorithm is “Never repair anything expensive.” If this is a good algorithm, fine; if this is a bad algorithm, oh well. The arguments you write afterward, above the bottom line, will not change anything either way.
This is intended as a caution for your own thinking, not a Fully General Counterargument against conclusions you don’t like. For it is indeed a clever argument to say “My opponent is a clever arguer,” if you are paying yourself to retain whatever beliefs you had at the start. The world’s cleverest arguer may point out that the Sun is shining, and yet it is still probably daytime.
1Max Tegmark, “Parallel Universes,” in Science and Ultimate Reality: Quantum Theory, Cosmology, and Complexity, ed. John D. Barrow, Paul C. W. Davies, and Charles L. Harper Jr. (New York: Cambridge University Press, 2004), 459–491, http://arxiv.org/abs/astro-ph/0302131.
For the person who reads and evaluates the arguments, the question is: what would count as evidence about whether the author wrote the conclusion down first or only at the end of his analysis? It is noteworthy that most media, such as newspapers or academic journals, appear to do little to communicate such evidence. So either such evidence is hard to obtain, or few readers are interested in it.
“What would count as evidence about whether the author wrote the conclusion down first or at the end of his analysis?”:
Past history of accuracy/trustworthiness;
Evidence of a lack of incentive for bias;
Spot-check results for sampling bias.
The last may be unreliable if a) you’re the author, or b) your spot-check source is itself biased, e.g. by a generally accepted but biased paradigm.
In the real world this is complicated by the fact that the bottom line may only have been “pencilled in”: it biases the argument, but is then adjusted in light of the argument—e.g.:
“Pencilled in” bottom line is 65;
Unbiased bottom line would be 45;
Adjusted bottom line is 55: neither correct, nor as incorrect as the original “pencilled in” value.
This “weak bias” algorithm can be recursive, leading eventually (sometimes over many years) to virtual elimination of the original bias, as often happens in scientific and philosophical discourse.
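A minimal sketch of that recursion, assuming a simple halving-toward-the-unbiased-value rule (the rule and the function name `adjust_bottom_line` are my own illustrative assumptions, not anything stated in the comment above):

```python
def adjust_bottom_line(pencilled_in, unbiased, rounds):
    """Each round of argument pulls the pencilled-in bottom line partway
    toward the unbiased value (assumed here: halfway per round)."""
    value = pencilled_in
    history = [value]
    for _ in range(rounds):
        value = (value + unbiased) / 2
        history.append(value)
    return history

# 65 pencilled in, 45 unbiased: one round gives 55, and repeated rounds
# approach 45, i.e. the original bias is virtually eliminated over time.
print(adjust_bottom_line(65, 45, rounds=5))
# [65, 55.0, 50.0, 47.5, 46.25, 45.625]
```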
If you’re reading someone else’s article, then it’s important to know whether you’re dealing with a sampling bias when looking at the arguments (more on this later). But my main point was about the evidence we should derive from our own conclusions, not about a Fully General Counterargument you could use to devalue someone else’s arguments. If you are paid to cleverly argue, then it is indeed a clever argument to say, “My opponent is only arguing cleverly, so I will discount it.”
However, when someone is trying to convince you of something, it is still important to try to determine whether they are a clever arguer or a curious inquirer. In the diamond-box scenario, for example, you should conclude (all other things being roughly equal) that the curious inquirer’s conclusion is more likely to be true than the clever arguer’s. It doesn’t really matter whether the source is internal or external, as long as you make the determination correctly. Basically, if you’re going to think about whether someone is being a clever arguer or a curious inquirer, you have to be a curious inquirer about getting that information, not someone trying to cleverly construct a Fully General Counterargument.
A sign S “means” something T when S is a reliable indicator of T. In this case, the clever arguer has sabotaged that reliability.
ISTM the parable presupposes (and needs to) that what the clever arguer produces is ordinarily a reliable indicator that box B contains the diamond, i.e. ordinarily means that. It would be pointless otherwise.
Therein lies a question: is he necessarily able to sabotage it? Posed the other way: are there formats which he can’t effectively sabotage but which suffice to express the interesting arguments?
There are formats that he can’t sabotage, such as rigorous machine-verifiable proof, but it is a great deal of work to use them even for their natural subject matter. So yes, with difficulty, for math-like topics.
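For instance, a machine-checkable proof (a trivial Lean 4 example of my own, not from the comment) is accepted or rejected by the proof checker on its own merits, however cleverly or selectively it is presented:

```lean
-- The checker verifies this regardless of the arguer's motives;
-- Nat.add_comm is a standard lemma in the Lean 4 core library.
theorem bottom_line_checked (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```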
For science-like topics in general, I think the answer is probably that it’s theoretically possible. It needs more than verifiable logic, though. Onlookers need to be able to verify experiments, and interpretive frameworks need to be managed, which is very hard.
For squishier topics, I make no answer.
The trick is to counterfeit the blue stamps :)
Can anyone explain the link between Designing Social Inquiry by KKV and this post? I feel that there is one.
I don’t think it’s either. Consider the many blog postings and informal essays—often on academic topics—which begin or otherwise include a narrative along the lines of ‘so I was working on X and I ran into an interesting problem/a strange thought popped up, and I began looking into it...’ They’re interesting (at least to me), and common.
So I think the reason we don’t see it is that A) it looks biased if your Op-ed on, say, the latest bailout goes ‘So I was watching Fox News and I heard what those tax-and-spend liberals were planning this time...’, so that’s incentive to avoid many origin stories; and B) it’s seen as too personal and informal. Academic papers are supposed to be dry, timeless, and rigorous. It would be seen as in bad taste if Newton’s Principia had opened with an anecdote about a summer day out in the orchard.
Non Sequitur presents the bottom line literally.
...And your effectiveness as a person is determined by whichever algorithm actually causes your actions.
Define “effectiveness as a person”—in many cases the bias leading to the pre-written conclusion has some form of survival value (e.g. social survival). Due partly to childhood issues resulting in a period of (complete?) rejection of the value of emotions, I have an unusually high resistance to intellectual bias, yet on a number of measures of “effectiveness as a person” I do not seem to be measuring up well yet (on some others I seem to be doing okay).
Also, as I mentioned in my reply to the first comment, real-world algorithms are often an amalgam of the two approaches, so it is not so much which algorithm as what weighting the approaches get. In most (if not all) people this weighting changes with the subject, not just with the person’s general level of rationality/intellectual honesty.
As it is almost impossible to detect and neutralize all of one’s biases and assumptions, and dangerous to attempt “counter-bias”, arriving at a result known to be truly unbiased is rare. NOTE: Playing “Devil’s Advocate” sensibly is not “counter-bias” and in a reasonable entity will help to reveal and neutralize bias.
I think bias is irrelevant here. My point was that, whatever your definition of “effectiveness as a person”, your actions are determined by the algorithm that caused them, not by the algorithm that you profess to follow.
I guess that this algorithm is called emotions, and we are mostly an emotional dog wagging a rational tail.
You might be tempted to say, “Well, this is kinda obvious,” but in my experience (LW included) most people are not aware of, and don’t spend any time considering, what emotions are really driving their bottom line, and instead get lost discussing superficial arguments ad nauseam.
The idea here has stuck with me as one of the best nuggets of wisdom from the sequences. My current condensation of it is as follows:
If you let reality have the final word, you might not like the bottom line. If instead you keep deliberating until the balance of arguments supports your preferred conclusion, you’re almost guaranteed to be satisfied eventually!
Inspired by the above, I offer the pseudocode version...
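Something along these lines, perhaps (a runnable Python sketch of my own; the names `balance_of_arguments`, `deliberate_until_satisfied`, and the weighting scheme are illustrative assumptions):

```python
import random

def balance_of_arguments(arguments, weights):
    """Weighted sum of argument strengths: positive values favor the
    preferred conclusion, negative values favor the alternative."""
    return sum(w * a for a, w in zip(arguments, weights))

def deliberate_until_satisfied(max_rounds=1000):
    """The failure mode quoted above: keep collecting arguments, and keep
    weighting them conveniently, until the balance supports the conclusion
    that was written down first."""
    arguments, weights = [], []
    for _ in range(max_rounds):
        if arguments and balance_of_arguments(arguments, weights) > 0:
            return "preferred conclusion reached"  # satisfied eventually!
        strength = random.uniform(-1, 1)            # a new sign or portent
        arguments.append(strength)
        weights.append(1.0 if strength > 0 else 0.1)  # motivated weighting
    return "still deliberating"

print(deliberate_until_satisfied())
```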
… the code above implements “the balance of arguments” as a function parameterized with weights. This allows for using an optimization process to reach one’s desired conclusion more quickly :)
Good fable. If we swap out the diamond MacGuffin for logic itself, it’s a whole new level of Gödelian pain: can iterations of weak-bias priors catch this out? Some argue that analogue intuitions survive these formal paradox gardens, but my own intuition doubts this… maybe my intuition is too formal, who knows?
Also, some “intuitions” are heavily resistant to forgetting about the diamond because they want it badly, and then the measures they use to collect data often interfere with their sense of the world and thus of reality. I suspect “general intelligence” and “race” are examples of these pursuits (separately and together; I think they mean smarts and populations, but proponents hate that). Thus AGI is a possible goose chase, especially when we are the measure of all things looking for greener pastures. This is how cognitive dissonance is possible in otherwise non-narcissistic members of humanity.
Also, beware of any enterprise that requires new clothes; this applies even if you are not an emperor.
Shiny diamond negligees in particular.
Is there any hope for legal professionals? Attorneys are TRAINED to start with the bottom line, a predetermined conclusion, and then to backfill the reasons. The last thing their clients want them to do is to objectively weigh the evidence and then come to a conclusion.
The only highly educated Flat Earthers I have ever encountered have been attorneys. This, I believe, is not a coincidence.